
    Envelope Word and Gap Sequence in Doubling Sequence

    Let $\omega$ be a factor of the Doubling sequence $D_\infty = x_1 x_2 \cdots$; then it occurs in the sequence infinitely many times. Let $\omega_p$ be the $p$-th occurrence of $\omega$ and $G_p(\omega)$ be the gap between $\omega_p$ and $\omega_{p+1}$. In this paper, we discuss the structure of the gap sequence $\{G_p(\omega)\}_{p\geq 1}$. We prove that all factors can be divided into two types: one type has exactly two distinct gaps, $G_1(\omega)$ and $G_2(\omega)$; the other type has exactly three distinct gaps, $G_1(\omega)$, $G_2(\omega)$ and $G_4(\omega)$. We determine the expressions of the gaps completely, and also give the substitution generating each gap sequence. The main tool in this paper is the "envelope word", a new notion denoted by $E_{m,i}$. As an application, we determine the positions of all $\omega_p$, discuss some combinatorial properties of factors, and count the distinct squares beginning in $D_\infty[1,N]$ for $N\geq 1$.

    Comment: 14 pages, 7 figures. arXiv admin note: text overlap with arXiv:1408.372
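The gap structure described above can be explored numerically. The sketch below assumes the Doubling sequence is the period-doubling sequence, i.e. the fixed point of the substitution a → ab, b → aa (an assumption; the paper's exact conventions, in particular how a gap is measured, may differ — here gaps are distances between consecutive starting positions):

```python
def doubling_prefix(n_iters):
    """Iterate the substitution a -> ab, b -> aa starting from 'a'."""
    s = "a"
    for _ in range(n_iters):
        s = "".join("ab" if c == "a" else "aa" for c in s)
    return s

def gaps(word, factor):
    """Occurrence positions of `factor` and the distinct gaps between
    consecutive occurrences (start-to-start distances)."""
    positions = [i for i in range(len(word) - len(factor) + 1)
                 if word[i:i + len(factor)] == factor]
    return positions, sorted({b - a for a, b in zip(positions, positions[1:])})

prefix = doubling_prefix(10)          # 2^10 = 1024 symbols
pos, distinct = gaps(prefix, "ab")    # e.g. the factor "ab"
```

For the factor "ab" this yields exactly two distinct gaps, matching the two-gap type of factor in the dichotomy above.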

    Mitigate Replication and Copying in Diffusion Models with Generalized Caption and Dual Fusion Enhancement

    While diffusion models demonstrate a remarkable capability for generating high-quality images, their tendency to `replicate' training data raises privacy concerns. Although recent research suggests that this replication may stem from the insufficient generalization of training-data captions and the duplication of training images, effective mitigation strategies remain elusive. To address this gap, our paper first introduces a generality score that measures caption generality and employs a large language model (LLM) to generalize training captions. Subsequently, we leverage the generalized captions and propose a novel dual fusion enhancement approach to mitigate replication in diffusion models. Our empirical results demonstrate that our proposed methods can significantly reduce replication by 43.5% compared to the original diffusion model while maintaining the diversity and quality of generations.

    Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference

    The large number of ReLU and MAC operations in deep neural networks makes them ill-suited for latency- and compute-efficient private inference. In this paper, we present a model optimization method that allows a model to learn to be shallow. In particular, we leverage the ReLU sensitivity of a convolutional block to remove a ReLU layer and merge its succeeding and preceding convolution layers into a shallow block. Unlike existing ReLU reduction methods, our joint reduction method can yield models with improved reduction of both ReLUs and linear operations, by up to 1.73x and 1.47x respectively, evaluated with ResNet18 on CIFAR-100 without any significant accuracy drop.
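The key algebraic fact behind merging the layers around a removed ReLU is that two linear maps with no nonlinearity between them compose into a single linear map. A minimal sketch (illustrative only, not the paper's procedure; shown with dense layers, but the same algebra applies channel-wise to convolutions):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # first layer
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)   # second layer

# With the ReLU between them removed:
#   y = W2 @ (W1 @ x + b1) + b2 = (W2 @ W1) @ x + (W2 @ b1 + b2)
W_merged = W2 @ W1
b_merged = W2 @ b1 + b2

x = rng.normal(size=4)
deep = W2 @ (W1 @ x + b1) + b2
shallow = W_merged @ x + b_merged
assert np.allclose(deep, shallow)   # identical outputs, one layer fewer
```

This is why removing a ReLU permits a depth reduction at no cost in expressivity for that block: the merged layer is exactly equivalent.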

    Robot Learning on the Job: Human-in-the-Loop Autonomy and Learning During Deployment

    With the rapid growth of computing power and recent advances in deep learning, we have witnessed impressive demonstrations of novel robot capabilities in research settings. Nonetheless, these learning systems exhibit brittle generalization and require excessive training data for practical tasks. To harness the capabilities of state-of-the-art robot learning models while embracing their imperfections, we present Sirius, a principled framework for humans and robots to collaborate through a division of work. In this framework, partially autonomous robots are tasked with handling a major portion of decision-making where they work reliably; meanwhile, human operators monitor the process and intervene in challenging situations. Such a human-robot team ensures safe deployment in complex tasks. Further, we introduce a new learning algorithm to improve the policy's performance on the data collected from the task executions. The core idea is re-weighting training samples with approximated human trust and optimizing the policies with weighted behavioral cloning. We evaluate Sirius in simulation and on real hardware, showing that Sirius consistently outperforms baselines over a collection of contact-rich manipulation tasks, achieving an 8% boost in simulation and 27% on real hardware over the state-of-the-art methods, with twice-as-fast convergence and an 85% reduction in memory size. Videos and code are available at https://ut-austin-rpl.github.io/sirius
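The weighted behavioral cloning idea can be sketched as a trust-weighted negative log-likelihood over demonstrated actions. This is a hypothetical illustration (the function name, discrete-action setting, and trust values are assumptions; Sirius's actual trust model and weighting scheme are more involved):

```python
import numpy as np

def weighted_bc_loss(action_logits, actions, trust):
    """Trust-weighted cross-entropy over discrete actions.

    action_logits: (N, A) policy outputs; actions: (N,) demonstrated
    actions; trust: (N,) per-sample weights (approximated human trust).
    """
    shifted = action_logits - action_logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(actions)), actions]
    return float((trust * nll).sum() / trust.sum())

# Two samples: the policy matches the first demonstration, not the second.
logits = np.array([[4.0, 0.0], [4.0, 0.0]])
actions = np.array([0, 1])

uniform = weighted_bc_loss(logits, actions, np.array([1.0, 1.0]))
trusted = weighted_bc_loss(logits, actions, np.array([1.0, 0.1]))
# Down-weighting the low-trust (mismatched) sample reduces the loss,
# steering gradient updates toward the trusted demonstrations.
```

The design intent is that samples with high approximated trust dominate the objective, so the cloned policy imitates reliable behavior more strongly than questionable interventions.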